Deep Co-Space: Sample Mining Across Feature Transformation for Semi-Supervised Learning
Authors
Abstract
Aiming at improving the performance of visual classification in a cost-effective manner, this paper proposes an incremental semi-supervised learning paradigm called Deep Co-Space (DCS). Unlike many conventional semi-supervised learning methods, which usually operate within a fixed feature space, our DCS gradually propagates information from labeled samples to unlabeled ones along with deep feature learn...
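The abstract only sketches the idea of propagating label information to unlabeled samples while the feature space itself keeps evolving. The minimal sketch below illustrates that general pattern with plain confidence-based self-training: confident pseudo-labels mined in the current feature space are folded into the labeled pool, and the feature extractor is then re-learned. It is not the authors' DCS method (which mines samples by comparing consecutive feature spaces); all names, thresholds, and the toy network are assumptions made for illustration.

```python
# Illustrative sketch only: incremental self-training in which the feature
# space is re-learned after each round of pseudo-labelling. Not the DCS
# implementation; SmallNet, the threshold, and the round count are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy feature extractor + classifier (stand-in for a deep CNN)."""
    def __init__(self, in_dim=32, feat_dim=16, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

def train(model, x, y, epochs=50, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()

def incremental_ssl(x_lab, y_lab, x_unlab, rounds=5, threshold=0.9):
    """Gradually move confident unlabeled samples into the labeled pool,
    re-learning the whole network (i.e. the feature space) each round."""
    for _ in range(rounds):
        model = SmallNet(in_dim=x_lab.size(1))
        train(model, x_lab, y_lab)                # feature space of this round
        if len(x_unlab) == 0:
            break
        with torch.no_grad():
            probs = F.softmax(model(x_unlab), dim=1)
            conf, pseudo = probs.max(dim=1)
        keep = conf >= threshold                  # mine only confident samples
        if keep.sum() == 0:
            break
        x_lab = torch.cat([x_lab, x_unlab[keep]])
        y_lab = torch.cat([y_lab, pseudo[keep]])
        x_unlab = x_unlab[~keep]
    return model

if __name__ == "__main__":
    torch.manual_seed(0)
    x_lab, y_lab = torch.randn(30, 32), torch.randint(0, 3, (30,))
    x_unlab = torch.randn(200, 32)
    incremental_ssl(x_lab, y_lab, x_unlab)
```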
Similar Resources
Semi-Supervised Feature Transformation for Dependency Parsing
In current dependency parsing models, conventional features (i.e. base features) defined over surface words and part-of-speech tags in a relatively high-dimensional feature space may suffer from the data sparseness problem and thus exhibit less discriminative power on unseen data. In this paper, we propose a novel semi-supervised approach to addressing the problem by transforming the base featu...
Semi-supervised deep kernel learning
Deep learning techniques have led to massive improvements in recent years, but large amounts of labeled data are typically required to learn these complex models. We present a semi-supervised approach for training deep models that combines the feature learning capabilities of neural networks with the probabilistic modeling of Gaussian processes and demonstrate that unlabeled data can significan...
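As a loose illustration of pairing a neural feature extractor with a Gaussian process, the sketch below trains a small network on labeled data and then fits a GP on the frozen learned features, so the GP supplies predictive uncertainty. This two-stage stand-in is not the paper's jointly trained model or its semi-supervised objective; the encoder, dimensions, and hyperparameters are assumptions.

```python
# Loose illustration only: neural features feeding a Gaussian process.
# Not the paper's method; all names and settings here are assumptions.
import torch
import torch.nn as nn
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class Encoder(nn.Module):
    """Tiny feature extractor standing in for a deep network."""
    def __init__(self, in_dim=10, feat_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.Tanh())

    def forward(self, x):
        return self.net(x)

def fit_deep_gp(x_lab, y_lab, epochs=100, lr=1e-2):
    """Stage 1: fit encoder + linear head on labeled data.
    Stage 2: fit a GP regressor on the frozen learned features."""
    enc, head = Encoder(x_lab.size(1)), nn.Linear(8, 1)
    opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(head(enc(x_lab)).squeeze(1), y_lab)
        loss.backward()
        opt.step()
    with torch.no_grad():
        feats = enc(x_lab).numpy()
    gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(feats, y_lab.numpy())
    return enc, gp

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(40, 10)
    y = x[:, 0] + 0.1 * torch.randn(40)
    enc, gp = fit_deep_gp(x, y)
    with torch.no_grad():
        mean, std = gp.predict(enc(torch.randn(5, 10)).numpy(), return_std=True)
    print(mean.shape, std.shape)   # predictive mean and uncertainty per test point
```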
Semi-Supervised Feature Transformation for Tissue Image Classification
Various systems have been proposed to support biological image analysis, with the intent of decreasing false annotations and reducing the heavy burden on biologists. These systems generally comprise a feature extraction method and a classification method. Task-oriented methods for feature extraction leverage characteristic images for each problem, and they are very effective at improving the cl...
Finite sample analysis of semi-supervised learning
Empirical evidence shows that in favorable situations semi-supervised learning (SSL) algorithms can capitalize on the abundance of unlabeled training data to improve the performance of a learning task, in the sense that fewer labeled training data are needed to achieve a target error bound. However, in other situations unlabeled data do not seem to help. Recent attempts at theoretically charact...
Journal
Journal title: IEEE Transactions on Circuits and Systems for Video Technology
Year: 2018
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/tcsvt.2017.2710478